Kalman Filtering With Relays Over Wireless Fading Channels
This note studies the use of relays to improve the performance of Kalman
filtering over packet dropping links. Packet reception probabilities are
governed by time-varying fading channel gains, and the sensor and relay
transmit powers. We consider situations with multiple sensors and relays, where
each relay can either forward one of the sensors' measurements to the
gateway/fusion center, or perform a simple linear network coding operation on
some of the sensor measurements. Using an expected error covariance performance
measure, we consider optimal and suboptimal methods for finding the best relay
configuration, and power control problems for optimizing the Kalman filter
performance. Our methods show that significant performance gains can be
obtained through the use of relays, network coding and power control, with at
least 30-40 less power consumption for a given expected error covariance
specification.Comment: 7 page
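The packet-dropping Kalman filter at the heart of this setup can be sketched in a few lines (a minimal scalar version; the relay selection, network coding and power control layers are all omitted, and the parameter names are illustrative):

```python
import random

def kf_with_drops(a, c, q, r, p0, arrival_prob, ys, seed=0):
    """Scalar Kalman filter over a packet-dropping link: each measurement
    packet arrives independently with probability arrival_prob; on a drop
    only the time update runs, so the error covariance grows."""
    rng = random.Random(seed)
    x_hat, p = 0.0, p0
    cov_trace = []
    for y in ys:
        # time update (prediction)
        x_hat = a * x_hat
        p = a * a * p + q
        # measurement update only if the packet got through
        if rng.random() < arrival_prob:
            k = p * c / (c * c * p + r)
            x_hat += k * (y - c * x_hat)
            p = (1.0 - k * c) * p
        cov_trace.append(p)
    return cov_trace
```

Raising the arrival probability (e.g. by spending more transmit power, or by adding a relay hop) directly lowers the expected error covariance, which is the trade-off the paper optimizes.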
Optimal Energy Allocation for Kalman Filtering over Packet Dropping Links with Imperfect Acknowledgments and Energy Harvesting Constraints
This paper presents a design methodology for optimal transmission energy
allocation at a sensor equipped with energy harvesting technology for remote
state estimation of linear stochastic dynamical systems. In this framework, the
sensor measurements as noisy versions of the system states are sent to the
receiver over a packet dropping communication channel. The packet dropout
probabilities of the channel depend on both the sensor's transmission energies
and time varying wireless fading channel gains. The sensor has access to an
energy harvesting source, an everlasting but unreliable energy source
compared to conventional batteries with fixed energy storage. The receiver
performs optimal state estimation with random packet dropouts to minimize the
estimation error covariances based on received measurements. The receiver also
sends packet receipt acknowledgments to the sensor via an erroneous feedback
communication channel which is itself packet dropping.
The objective is to design optimal transmission energy allocation at the
energy harvesting sensor to minimize either a finite-time horizon sum or a long
term average (infinite-time horizon) of the trace of the expected estimation
error covariance of the receiver's Kalman filter. These problems are formulated
as Markov decision processes with imperfect state information. The optimal
transmission energy allocation policies are obtained by the use of dynamic
programming techniques. Using the concept of submodularity, the structure of
the optimal transmission energy policies are studied. Suboptimal solutions are
also discussed which are far less computationally intensive than optimal
solutions. Numerical simulation results are presented illustrating the
performance of the energy allocation algorithms.
Comment: Submitted to IEEE Transactions on Automatic Control. arXiv admin note: text overlap with arXiv:1402.663
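The finite-horizon formulation can be illustrated with a toy backward dynamic program over battery levels (a deliberately simplified sketch: deterministic unit harvesting, a fixed drop-probability curve, and a scalar per-stage cost stand in for the paper's fading gains, imperfect acknowledgments, and error-covariance dynamics):

```python
def finite_horizon_dp(horizon, battery_max, harvest, drop_prob, cost):
    """Backward dynamic programming over integer battery levels.
    drop_prob(e): packet drop probability when transmitting with energy e.
    cost(dropped): per-stage estimation cost contribution.
    Returns the value table V[t][b] and a greedy energy policy pi[t][b]."""
    V = [[0.0] * (battery_max + 1) for _ in range(horizon + 1)]
    pi = [[0] * (battery_max + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for b in range(battery_max + 1):
            best_v, best_e = float("inf"), 0
            for e in range(b + 1):  # cannot spend more than stored energy
                nb = min(b - e + harvest, battery_max)  # next battery level
                pd = drop_prob(e)
                stage = pd * cost(True) + (1 - pd) * cost(False)
                v = stage + V[t + 1][nb]
                if v < best_v:
                    best_v, best_e = v, e
            V[t][b] = best_v
            pi[t][b] = best_e
    return V, pi
```

The backward sweep is the standard dynamic-programming technique the paper uses; its actual state also carries the channel gain, the imperfect acknowledgment information and the error covariance, which makes the real problem a partially observed MDP rather than this fully observed toy.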
Game-Theoretic Pricing and Selection with Fading Channels
We consider pricing and selection with fading channels in a Stackelberg game
framework. A channel server decides the channel prices and a client chooses
which channel to use based on the remote estimation quality. We prove the
existence of an optimal deterministic and Markovian policy for the client, and
show that the optimal policies of both the server and the client have threshold
structures when the time horizon is finite. A value iteration algorithm is
applied to obtain the optimal solutions for both the server and the client, and
numerical simulations and examples are given to demonstrate the developed
results.
Comment: 6 pages, 4 figures, accepted by the 2017 Asian Control Conference
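The threshold structure is easy to see in a toy version of the client's problem (a sketch under simplifying assumptions: the state is the number of steps since the last successful reception, the estimation error cost grows with it, and a fixed price buys a channel with higher success probability; the server's pricing side of the Stackelberg game is not modelled here):

```python
def value_iteration(n_states, price, succ_cheap, succ_good,
                    err, gamma=0.95, iters=2000):
    """Value iteration over the holding time tau (steps since the last
    successfully received packet).  Action 0: cheap channel; action 1:
    priced channel with higher success probability.  On a success tau
    resets to 0, otherwise it grows (capped at n_states - 1)."""
    V = [0.0] * n_states
    for _ in range(iters):
        new_V = []
        for tau in range(n_states):
            nxt = min(tau + 1, n_states - 1)
            q0 = err(tau) + gamma * (succ_cheap * V[0] + (1 - succ_cheap) * V[nxt])
            q1 = err(tau) + price + gamma * (succ_good * V[0] + (1 - succ_good) * V[nxt])
            new_V.append(min(q0, q1))
        V = new_V
    policy = []
    for tau in range(n_states):
        nxt = min(tau + 1, n_states - 1)
        q0 = err(tau) + gamma * (succ_cheap * V[0] + (1 - succ_cheap) * V[nxt])
        q1 = err(tau) + price + gamma * (succ_good * V[0] + (1 - succ_good) * V[nxt])
        policy.append(0 if q0 <= q1 else 1)
    return V, policy
```

Because the value function is monotone in tau, the advantage of the better channel grows with tau, so the computed policy switches from the cheap channel to the priced one at a single threshold.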
An Optimal Transmission Strategy for Kalman Filtering over Packet Dropping Links with Imperfect Acknowledgements
This paper presents a novel design methodology for optimal transmission
policies at a smart sensor to remotely estimate the state of a stable linear
stochastic dynamical system. The sensor makes measurements of the process and
forms estimates of the state using a local Kalman filter. The sensor transmits
quantized information over a packet dropping link to the remote receiver. The
receiver sends packet receipt acknowledgments back to the sensor via an
erroneous feedback communication channel which is itself packet dropping. The
key novelty of this formulation is that the smart sensor decides, at each
discrete time instant, whether to transmit a quantized version of either its
local state estimate or its local innovation. The objective is to design
optimal transmission policies in order to minimize a long term average cost
function as a convex combination of the receiver's expected estimation error
covariance and the energy needed to transmit the packets. The optimal
transmission policy is obtained by the use of dynamic programming techniques.
Using the concept of submodularity, the optimality of a threshold policy in the
case of scalar systems with perfect packet receipt acknowledgments is proved.
Suboptimal solutions and their structural results are also discussed. Numerical
results are presented illustrating the performance of the optimal and
suboptimal transmission policies.
Comment: Conditionally accepted in IEEE Transactions on Control of Network Systems
Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber-Physical Systems
In many Cyber-Physical Systems, we encounter the problem of remote state
estimation of geographically distributed and remote physical processes. This
paper studies the scheduling of sensor transmissions to estimate the states of
multiple remote, dynamic processes. Information from the different sensors has
to be transmitted to a central gateway over a wireless network for monitoring
purposes, where typically fewer wireless channels are available than there are
processes to be monitored. For effective estimation at the gateway, the sensors
need to be scheduled appropriately, i.e., at each time instant one needs to
decide which sensors have network access and which ones do not. To address this
scheduling problem, we formulate an associated Markov decision process (MDP).
This MDP is then solved using a Deep Q-Network, a recent deep reinforcement
learning algorithm that is at once scalable and model-free. We compare our
scheduling algorithm to popular scheduling algorithms such as round-robin and
reduced-waiting-time, among others. Our algorithm is shown to significantly
outperform these algorithms for many example scenarios.
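A tabular Q-learning stand-in gives the flavour of the approach (the paper trains a Deep Q-Network over a much larger state space; here, as an illustrative assumption, the state is the vector of "ages" since each sensor last transmitted, there is a single channel, and the reward penalises total age as a crude proxy for estimation error):

```python
import random
from collections import defaultdict

def train_scheduler(n_sensors=3, max_age=5, episodes=2000, steps=30,
                    alpha=0.2, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning for single-channel sensor scheduling.
    The scheduled sensor's age resets to 0; all other ages grow,
    capped at max_age.  Reward is minus the total age."""
    rng = random.Random(seed)
    Q = defaultdict(float)

    def step(ages, a):
        nxt = tuple(0 if i == a else min(g + 1, max_age)
                    for i, g in enumerate(ages))
        return nxt, -sum(nxt)

    for _ in range(episodes):
        ages = tuple([max_age] * n_sensors)
        for _ in range(steps):
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.randrange(n_sensors)
            else:
                a = max(range(n_sensors), key=lambda x: Q[(ages, x)])
            nxt, r = step(ages, a)
            best_next = max(Q[(nxt, x)] for x in range(n_sensors))
            Q[(ages, a)] += alpha * (r + gamma * best_next - Q[(ages, a)])
            ages = nxt
    return Q

def greedy_action(Q, ages, n_sensors=3):
    """Pick the sensor with the highest learned action value."""
    return max(range(n_sensors), key=lambda a: Q[(ages, a)])
```

Swapping the table for a neural network that maps the (much larger) state to action values is what makes the DQN variant scalable and model-free.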
Estimation of Scalar Field Distribution in the Fourier Domain
In this paper we consider the problem of estimating a scalar field
distribution from noisy measurements. The field is modelled as a sum
of Fourier components/modes, where the number of modes retained and estimated
determines in a natural way the approximation quality. An algorithm for
estimating the modes using an online optimization approach is presented, under
the assumption that the noisy measurements are quantized. The algorithm can
estimate time-varying fields through the introduction of a forgetting factor.
Simulation studies demonstrate the effectiveness of the proposed approach.
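A minimal online sketch of the idea, assuming a 1-D field on [0, 1), a plain stochastic-gradient (LMS-style) update in place of the paper's algorithm, and quantized measurements supplied by the caller; the constant step size plays a role analogous to the forgetting factor when the field drifts:

```python
import math

def fit_fourier_online(samples, n_modes, step=0.05, passes=5):
    """Online estimation of Fourier coefficients of a 1-D field f(x) on
    [0, 1) from (possibly quantized, noisy) point samples (x, y).
    The number of retained modes n_modes sets the approximation quality.
    Returns (a, b): cosine and sine coefficient estimates."""
    a = [0.0] * (n_modes + 1)   # a[0] is the mean (DC) term
    b = [0.0] * (n_modes + 1)   # b[0] unused, kept for symmetric indexing
    for _ in range(passes):
        for x, y in samples:
            # current model prediction at x
            pred = a[0] + sum(a[k] * math.cos(2 * math.pi * k * x)
                              + b[k] * math.sin(2 * math.pi * k * x)
                              for k in range(1, n_modes + 1))
            e = y - pred
            # stochastic-gradient update of every retained mode
            a[0] += step * e
            for k in range(1, n_modes + 1):
                a[k] += step * e * math.cos(2 * math.pi * k * x)
                b[k] += step * e * math.sin(2 * math.pi * k * x)
    return a, b
```

Feeding it samples of, say, f(x) = 1 + 2 cos(2πx) corrupted by noise and a coarse quantizer recovers the mean and first-mode coefficients to within the noise and quantization floor.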
Probability of Error Analysis for Hidden Markov Model Filtering With Random Packet Loss
This paper studies the probability of error for maximum
a posteriori (MAP) estimation of hidden Markov models,
where measurements can be either lost or received according to another
Markov process. Analytical expressions for the error probabilities
are derived for the noiseless and noisy cases. Some relationships
between the error probability and the parameters of the
loss process are demonstrated via both analysis and numerical results.
In the high signal-to-noise ratio (SNR) regime, approximate
expressions which can be more easily computed than the exact analytical
form for the noisy case are presented.
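The filtering recursion behind the MAP estimates can be sketched as follows (a hedged sketch: the code simply skips the measurement update whenever a packet is lost, marked here by None; the paper's Markovian loss model and its error-probability analysis are not reproduced):

```python
def map_filter(trans, emis, obs):
    """Forward (filtering) recursion for a finite-state HMM.
    trans[i][j]: state transition probability, emis[j][o]: emission
    probability.  obs[t] is an observation symbol, or None when the
    measurement packet was lost.  Returns the MAP state estimate at
    each time step."""
    n = len(trans)
    p = [1.0 / n] * n  # uniform prior over states
    path = []
    for o in obs:
        # prediction through the Markov chain
        q = [sum(p[i] * trans[i][j] for i in range(n)) for j in range(n)]
        # measurement update only when the packet arrived
        if o is not None:
            q = [q[j] * emis[j][o] for j in range(n)]
        s = sum(q)
        p = [v / s for v in q]
        path.append(max(range(n), key=lambda j: p[j]))
    return path
```

On lost packets the posterior simply diffuses toward the chain's stationary distribution, which is why the error probability degrades with the loss rate.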